I've seen possessed children scream like beasts and strung up like puppets... these chilling exorcism cases PROVE hell is real
There is a hidden battlefield within our world, where forces of light and darkness collide, believers say, in a conflict that sometimes spills into everyday life. In its most extreme form, the clash is described as possession: a person seemingly seized by demonic beings, their body overtaken, their voice and movements warped into something not quite human.
For Anglican reverend Chris Lee, 43, this is not a theological abstraction but a reality he has lived with for nearly two decades.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.25)
- North America > United States > Maine (0.24)
- North America > Canada > Alberta (0.14)
- (16 more...)
- Media > Television (1.00)
- Media > Music (1.00)
- Media > Film (1.00)
- (7 more...)
Building Capacity for Artificial Intelligence in Africa: A Cross-Country Survey of Challenges and Governance Pathways
Aryee, Jeffrey N. A., Davies, Patrick, Torsah, Godfred A., Apaw, Mercy M., Boateng, Cyril D., Mwando, Sam M., Kwisanga, Chris, Jobunga, Eric, Amekudzi, Leonard K.
Artificial intelligence (AI) is transforming education and the workforce, but access to AI learning opportunities in Africa remains uneven. With rapid demographic shifts and growing labour-market pressures, AI has become a strategic development priority, making the demand for relevant skills more urgent. This study investigates how universities and industries engage in shaping AI education and workforce preparation, drawing on survey responses from five African countries (Ghana, Namibia, Rwanda, Kenya and Zambia). The findings show broad recognition of AI's importance but limited evidence of consistent engagement, practical training, or equitable access to resources. Most respondents who rated the AI component of their curriculum as very relevant reported being well prepared for jobs, but financial barriers, poor infrastructure, and weak communication limit participation, especially among students and underrepresented groups. Respondents highlighted internships, industry partnerships, and targeted support mechanisms as critical enablers, alongside the need for inclusive governance frameworks. The results show both the growing awareness of AI's potential and the structural gaps that hinder its translation into workforce capacity. Strengthening university-industry collaboration and addressing barriers of access, funding, and policy are central to ensuring that AI contributes to equitable and sustainable development across the continent.
- Africa > Ghana (0.26)
- Africa > Zambia (0.25)
- Africa > Kenya > Mombasa County > Mombasa (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Education > Educational Setting (0.96)
- Information Technology > Security & Privacy (0.69)
Fairness Evaluation of Large Language Models in Academic Library Reference Services
Wang, Haining, Clark, Jason, Yan, Yueru, Bradley, Star, Chen, Ruiyang, Zhang, Yiqiong, Fu, Hengyi, Tian, Zuoyu
As libraries explore large language models (LLMs) for use in virtual reference services, a key question arises: Can LLMs serve all users equitably, regardless of demographics or social status? While they offer great potential for scalable support, LLMs may also reproduce societal biases embedded in their training data, risking the integrity of libraries' commitment to equitable service. To address this concern, we evaluate whether LLMs differentiate responses across user identities by prompting six state-of-the-art LLMs to assist patrons differing in sex, race/ethnicity, and institutional role. We find no evidence of differentiation by race or ethnicity, and only minor evidence of stereotypical bias against women in one model. LLMs demonstrate nuanced accommodation of institutional roles through the use of linguistic choices related to formality, politeness, and domain-specific vocabularies, reflecting professional norms rather than discriminatory treatment. These findings suggest that current LLMs show a promising degree of readiness to support equitable and contextually appropriate communication in academic library reference services.
- Europe > Austria > Vienna (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Montana > Gallatin County > Bozeman (0.04)
- (17 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study > Negative Result (0.34)
- Health & Medicine (1.00)
- Education (1.00)
- Information Technology (0.68)
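The evaluation design above — prompting the same question while varying only the patron's stated identity — can be sketched as a prompt-perturbation loop. The attribute values and template below are illustrative assumptions, not the paper's actual wording:

```python
from itertools import product

# Hypothetical identity attributes; the study's exact categories and prompt
# phrasing are not reproduced here.
SEXES = ["male", "female"]
ROLES = ["undergraduate student", "faculty member"]

def make_prompts(question: str) -> list[str]:
    """Build one reference question per (sex, role) variant so responses
    can be compared side by side for differential treatment."""
    template = "A {sex} {role} asks the library: {question}"
    return [template.format(sex=s, role=r, question=question)
            for s, r in product(SEXES, ROLES)]

prompts = make_prompts("How do I request an interlibrary loan?")
```

Each variant would be sent to the model under test, and the responses scored for differences in formality, politeness, or substance across identities.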
How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence
Du, Hongzhe, Li, Weikai, Cai, Min, Saraipour, Karim, Zhang, Zimin, Lakkaraju, Himabindu, Sun, Yizhou, Zhang, Shichang
Post-training is essential for the success of large language models (LLMs), transforming pre-trained base models into more useful and aligned post-trained models. While many works have studied post-training algorithms and evaluated post-trained models by their outputs, it remains understudied how post-training reshapes LLMs internally. In this paper, we compare base and post-trained LLMs mechanistically from four perspectives to better understand post-training effects. Our findings across model families and datasets reveal that: (1) Post-training does not change the factual knowledge storage locations; it adapts knowledge representations from the base model while developing new knowledge representations; (2) Both truthfulness and refusal can be represented by vectors in the hidden representation space. The truthfulness direction is highly similar between the base and post-trained model, and it is effectively transferable for interventions; (3) The refusal direction differs between the base and post-trained models, and it shows limited forward transferability; (4) Differences in confidence between the base and post-trained models cannot be attributed to entropy neurons. Our study provides insights into the fundamental mechanisms preserved and altered during post-training, facilitates downstream tasks like model steering, and could potentially benefit future research in interpretability and LLM post-training. Our code is publicly available at https://github.com/HZD01/post-training-mechanistic-analysis.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > Canada > Alberta (0.14)
- Africa > Tanzania > Dar es Salaam Region > Dar es Salaam (0.04)
- (8 more...)
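The "direction" findings above can be illustrated with a toy difference-of-means computation: a behaviour direction is the mean hidden state on positive examples minus the mean on negative examples, and similarity between a base model's direction and a post-trained model's direction is probed via cosine similarity. All arrays here are synthetic; this is a sketch of the general technique, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def behaviour_direction(pos: np.ndarray, neg: np.ndarray) -> np.ndarray:
    """Unit-norm difference-of-means direction in hidden-representation space."""
    d = pos.mean(axis=0) - neg.mean(axis=0)
    return d / np.linalg.norm(d)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic hidden states (n_examples x hidden_dim) sharing a common signal,
# standing in for the base and post-trained models' activations.
signal = rng.normal(size=64)
base_pos = signal + 0.1 * rng.normal(size=(32, 64))
base_neg = -signal + 0.1 * rng.normal(size=(32, 64))
post_pos = signal + 0.1 * rng.normal(size=(32, 64))
post_neg = -signal + 0.1 * rng.normal(size=(32, 64))

sim = cosine(behaviour_direction(base_pos, base_neg),
             behaviour_direction(post_pos, post_neg))
```

A similarity near 1 corresponds to the paper's "highly similar truthfulness direction" case; a low similarity corresponds to the divergent refusal direction.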
VLURes: Benchmarking VLM Visual and Linguistic Understanding in Low-Resource Languages
Atuhurra, Jesse, Ali, Iqra, Iwakura, Tomoya, Kamigaito, Hidetaka, Hiraoka, Tatsuya
Vision Language Models (VLMs) are pivotal for advancing perception in intelligent agents. Yet evaluation of VLMs remains limited to predominantly English-centric benchmarks in which the image-text pairs comprise short texts. To evaluate VLMs' fine-grained abilities in four languages under long-text settings, we introduce VLURes, a novel multilingual benchmark featuring eight vision-and-language tasks and a pioneering unrelatedness task, probing the fine-grained visual and linguistic understanding capabilities of VLMs across English, Japanese, and the low-resource languages Swahili and Urdu. Our datasets, curated from web resources in the target languages, encompass ten diverse image categories and rich textual context, introducing valuable vision-language resources for Swahili and Urdu. By prompting VLMs to generate responses and rationales, evaluated automatically and by native speakers, we uncover performance disparities across languages and tasks critical to intelligent agents, such as object recognition, scene understanding, and relationship understanding. We evaluated ten VLMs with VLURes. The best-performing model, GPT-4o, achieves an overall accuracy of 90.8% yet lags human performance by 6.7%, and the gap is larger for open-source models. This gap highlights VLURes' critical role in developing intelligent agents to tackle multi-modal visual reasoning.
- Africa > Eswatini (0.15)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- Africa > Tanzania > Dar es Salaam Region > Dar es Salaam (0.04)
- (19 more...)
Automatic Speech Recognition (ASR) for African Low-Resource Languages: A Systematic Literature Review
Imam, Sukairaj Hafiz, Belay, Tadesse Destaw, Husse, Kedir Yassin, Ahmad, Ibrahim Said, Abdulmumin, Idris, Umar, Hadiza Ali, Bello, Muhammad Yahuza, Nakatumba-Nabende, Joyce, Yimam, Seid Muhie, Muhammad, Shamsuddeen Hassan
ASR has achieved remarkable global progress, yet African low-resource languages remain severely underrepresented, creating barriers to digital inclusion across a continent of more than 2,000 languages. This systematic literature review (SLR) examines research on ASR for African languages, focusing on datasets, models and training methods, evaluation techniques, and challenges, and recommends future directions. We follow the PRISMA 2020 procedures and search DBLP, the ACM Digital Library, Google Scholar, Semantic Scholar, and arXiv for studies published between January 2020 and July 2025. We include studies related to ASR datasets, models, or metrics for African languages, while excluding non-African, duplicate, and low-quality studies (score <3/5). We retain 71 of 2,062 screened records and catalogue a total of 74 datasets across 111 languages, encompassing approximately 11,206 hours of speech. Fewer than 15% of studies provided reproducible materials, and dataset licensing is often unclear. Self-supervised and transfer learning techniques are promising, but are hindered by limited pre-training data, inadequate coverage of dialects, and scarce resources. Most researchers use Word Error Rate (WER), with minimal use of linguistically informed scores such as Character Error Rate (CER) or Diacritic Error Rate (DER), limiting applicability to tonal and morphologically rich languages. The existing evidence on ASR systems is inconsistent, hindered by issues such as dataset availability, poor annotations, licensing uncertainties, and limited benchmarking. Nevertheless, the rise of community-driven initiatives and methodological advances indicates a pathway for improvement. Sustainable development in this area will require stakeholder partnerships, the creation of ethically well-balanced datasets, lightweight modelling techniques, and active benchmarking.
- Europe > Austria > Vienna (0.14)
- North America > United States (0.05)
- Africa > Niger (0.05)
- (21 more...)
- Overview (1.00)
- Research Report > New Finding (0.66)
- Research Report > Experimental Study (0.66)
- Health & Medicine (0.93)
- Media (0.68)
- Information Technology > Security & Privacy (0.46)
- Information Technology > Artificial Intelligence > Speech > Speech Recognition (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.74)
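The review's headline metric, Word Error Rate, is word-level edit distance divided by reference length. A minimal self-contained sketch, not tied to any particular ASR toolkit:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)
```

Character Error Rate is the same recurrence applied to characters rather than words, which is part of why it is better suited to tonal and morphologically rich languages where a single long word can carry several morphemes.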
Artificially Fluent: Swahili AI Performance Benchmarks Between English-Trained and Natively-Trained Datasets
As large language models (LLMs) expand multilingual capabilities, questions remain about the equity of their performance across languages. While many communities stand to benefit from AI systems, the dominance of English in training data risks disadvantaging non-English speakers. To test the hypothesis that such data disparities affect model performance, this study compares two monolingual BERT models: one trained and tested entirely on Swahili data, and another on comparable English news data. To simulate how multilingual LLMs process non-English queries through internal translation and abstraction, we translated the Swahili news data into English and evaluated it using the English-trained model. This design isolates the effect of language consistency versus cross-lingual abstraction: does translating Swahili inputs for evaluation on an English model perform better or worse than training and testing entirely in Swahili? The results show that, despite high-quality translation, the native Swahili-trained model performed better than the Swahili-to-English translated pipeline, producing nearly four times fewer errors (0.36% vs. 1.47%). This gap suggests that translation alone does not bridge representational differences between languages, and that models trained in one language may struggle to accurately interpret translated inputs due to imperfect internal knowledge representation; native-language training remains important for reliable outcomes. In educational and informational contexts, even small performance gaps may compound inequality. Future research should focus on broader dataset development for underrepresented languages and renewed attention to multilingual model evaluation, ensuring that global AI deployment reduces rather than reinforces existing digital divides.
- North America > United States > Illinois (0.05)
- Africa > Kenya > Nairobi City County > Nairobi (0.05)
- Oceania > Australia (0.04)
- (2 more...)
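The "nearly four times fewer errors" claim above is simply the ratio of the two reported error rates. A small sketch with the abstract's numbers, plus the generic error-rate definition being compared:

```python
def error_rate(predictions, labels) -> float:
    """Fraction of items misclassified."""
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

# Rates reported in the abstract: native Swahili-trained model vs.
# translate-then-classify through the English-trained model.
native_rate, translated_rate = 0.0036, 0.0147
factor = translated_rate / native_rate  # ratio behind "nearly four times fewer errors"
```

The ratio works out to just over 4, consistent with the abstract's phrasing.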
LAVA: Language Model Assisted Verbal Autopsy for Cause-of-Death Determination
Chen, Yiqun T., McCormick, Tyler H., Liu, Li, Datta, Abhirup
Verbal autopsy (VA) is a critical tool for estimating causes of death in resource-limited settings where medical certification is unavailable. This study presents LA-VA, a proof-of-concept pipeline that combines Large Language Models (LLMs) with traditional algorithmic approaches and embedding-based classification for improved cause-of-death prediction. Using the Population Health Metrics Research Consortium (PHMRC) dataset across three age categories (Adult: 7,580; Child: 1,960; Neonate: 2,438), we evaluate multiple approaches: GPT-5 predictions, an LCVA baseline, text embeddings, and meta-learner ensembles. Our results demonstrate that GPT-5 achieves the highest individual performance, with average test-site accuracies of 48.6% (Adult), 50.5% (Child), and 53.5% (Neonate), outperforming traditional statistical machine learning baselines by 5-10%. Our findings suggest that simple off-the-shelf LLM-assisted approaches could substantially improve verbal autopsy accuracy, with important implications for global health surveillance in low-resource settings.
- Africa > Mozambique > Cabo Delgado Province > Pemba (0.05)
- North America > Mexico > Mexico City > Mexico City (0.04)
- Asia > Philippines > Visayas > Central Visayas > Province of Bohol (0.04)
- (3 more...)
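The abstract combines several predictors (LLM, algorithmic baseline, embedding classifier) into an ensemble. The paper's meta-learner is not specified here, so the sketch below shows the simplest ensemble variant, a plurality vote over aligned per-case predictions; cause labels are made up for illustration:

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-model cause-of-death predictions by plurality vote.
    predictions_per_model: one list per model, aligned by case index."""
    n_cases = len(predictions_per_model[0])
    combined = []
    for i in range(n_cases):
        votes = Counter(model[i] for model in predictions_per_model)
        combined.append(votes.most_common(1)[0][0])
    return combined

ensemble = majority_vote([
    ["stroke", "sepsis", "stroke"],    # e.g. LLM predictions
    ["stroke", "sepsis", "drowning"],  # algorithmic baseline
    ["stroke", "malaria", "stroke"],   # embedding classifier
])
```

A trained meta-learner replaces the vote with a model that learns which base predictor to trust per input, but the interface — aligned base predictions in, one combined label per case out — is the same.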
Generalizable AI Model for Indoor Temperature Forecasting Across Sub-Saharan Africa
Akhtar, Zainab, Jengo, Eunice, Haßler, Björn
This study presents a lightweight, domain-informed AI model for predicting indoor temperatures in naturally ventilated schools and homes in Sub-Saharan Africa. The model extends the Temp-AI-Estimator framework, trained on Tanzanian school data, and evaluated on Nigerian schools and Gambian homes. It achieves robust cross-country performance using only minimal accessible inputs, with mean absolute errors of 1.45°C for Nigerian schools and 0.65°C for Gambian homes. These findings highlight AI's potential for thermal comfort management in resource-constrained environments.
- Africa > Sub-Saharan Africa (0.61)
- Africa > The Gambia (0.10)
- Africa > Tanzania > Dodoma Region > Dodoma (0.04)
- (4 more...)
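The headline numbers above (1.45°C and 0.65°C) are mean absolute errors. For reference, the metric is just the average absolute gap between predicted and measured indoor temperatures; the values below are illustrative, not from the study:

```python
def mean_absolute_error(predicted, actual) -> float:
    """MAE in the same units as the inputs (here, °C)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Illustrative indoor temperatures (°C), not data from the paper.
mae = mean_absolute_error([24.0, 26.5, 30.1], [24.5, 26.0, 31.1])
```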
SenWiCh: Sense-Annotation of Low-Resource Languages for WiC using Hybrid Methods
Goworek, Roksana, Karlcut, Harpal, Shezad, Muhammad, Darshana, Nijaguna, Mane, Abhishek, Bondada, Syam, Sikka, Raghav, Mammadov, Ulvi, Allahverdiyev, Rauf, Purighella, Sriram, Gupta, Paridhi, Ndegwa, Muhinyia, Dubossarsky, Haim
This paper addresses the critical need for high-quality evaluation datasets in low-resource languages to advance cross-lingual transfer. While cross-lingual transfer offers a key strategy for leveraging multilingual pretraining to expand language technologies to understudied and typologically diverse languages, its effectiveness depends on high-quality, suitable benchmarks. We release new sense-annotated datasets of sentences containing polysemous words, spanning ten low-resource languages across diverse language families and scripts. To facilitate dataset creation, the paper presents a demonstrably beneficial semi-automatic annotation method. The utility of the datasets is demonstrated through Word-in-Context (WiC) formatted experiments that evaluate transfer on these low-resource languages. Results highlight the importance of targeted dataset creation and evaluation for effective polysemy disambiguation in low-resource settings and transfer studies. The released datasets and code aim to support further research into fair, robust, and truly multilingual NLP.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- Europe > Germany > Saxony > Leipzig (0.05)
- (21 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.67)
- (3 more...)
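A Word-in-Context item pairs two sentences containing the same polysemous target word and asks whether the word carries the same sense in both. A minimal sketch of the format and its accuracy metric; the field names are illustrative, not the released dataset's schema:

```python
from dataclasses import dataclass

@dataclass
class WiCExample:
    target: str       # the polysemous word under test
    sentence_1: str
    sentence_2: str
    same_sense: bool  # gold label: same sense in both sentences?

def accuracy(examples, predict) -> float:
    """Share of items where predict(example) matches the gold label."""
    correct = sum(predict(ex) == ex.same_sense for ex in examples)
    return correct / len(examples)

items = [
    WiCExample("bank", "She sat on the river bank.", "He robbed a bank.", False),
    WiCExample("bank", "The bank approved the loan.", "He robbed a bank.", True),
]
baseline = accuracy(items, lambda ex: True)  # trivial "always same sense" baseline
```

A transfer experiment in this format swaps the trivial predictor for a model fine-tuned on a high-resource language and measures accuracy on the low-resource test sets.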